
    (Machine) Learning to Do More with Less

    Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (which relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics, since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail, both analytically and numerically, with a focus on its robustness to mischaracterization of the training samples. Weakly supervised networks turn out to be remarkably insensitive to systematic mismodeling. Furthermore, we demonstrate that the event-level outputs of weakly versus fully supervised networks probe different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the outputs of the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC. Comment: 32 pages, 12 figures. Example code is provided at https://github.com/bostdiek/PublicWeaklySupervised . v3: Version published in JHEP, discussion added.
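    The label-proportion idea at the heart of weak supervision can be illustrated with a toy sketch: a classifier is trained so that its average output on each mixed sample matches that sample's known signal fraction, with no per-event truth labels. The distributions, model, and loss below are illustrative assumptions, not the authors' implementation (their code is at the linked GitHub repository):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D "events": signal ~ N(+1, 1), background ~ N(-1, 1)
    def make_sample(n, f_sig):
        n_sig = int(n * f_sig)
        x = np.concatenate([rng.normal(+1.0, 1.0, n_sig),
                            rng.normal(-1.0, 1.0, n - n_sig)])
        rng.shuffle(x)
        return x

    # Two mixed samples whose signal fractions are the ONLY labels used
    samples = {0.8: make_sample(2000, 0.8), 0.2: make_sample(2000, 0.2)}

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Logistic model p(x) = sigmoid(w*x + b); the weak loss is
    # (batch mean of p - known fraction)^2, minimized by gradient descent
    w, b, lr = 0.0, 0.0, 0.5
    for _ in range(500):
        for f, x in samples.items():
            p = sigmoid(w * x + b)
            err = p.mean() - f              # sample-level residual
            dp = p * (1 - p)                # d sigmoid / d z
            w -= lr * 2 * err * (dp * x).mean()
            b -= lr * 2 * err * dp.mean()

    # The trained output ranks signal-like (x > 0) events higher
    print(w > 0, sigmoid(w * 2.0 + b) > sigmoid(w * -2.0 + b))
    ```

    Despite never seeing a per-event label, the model learns the same decision direction a fully supervised classifier would on this toy problem.
    
    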

    Comment on measuring the t-tbar forward-backward asymmetry at ATLAS and CMS

    We suggest a new possibility for ATLAS and CMS to explore the t-tbar forward-backward asymmetry measured at the Tevatron, by attempting to reconstruct t-tbar events with one of the tops decaying semileptonically in the central region (|\eta| < 2.5) and the other decaying hadronically in the forward region (|\eta| > 2.5). For several models which give comparable Tevatron signals, we study the charge asymmetry at the LHC as a function of cuts on |\eta| and on the t-tbar invariant mass, m_{t-tbar}. We show that there is an interesting complementarity between cuts on |\eta| and m_{t-tbar} in suppressing the dominant and symmetric gg -> t-tbar rate, and different combinations of cuts enhance the distinguishing power between models. This complementarity is likely to hold in other new physics scenarios that affect the t-tbar cross section as well, so it motivates extending t-tbar reconstruction to higher |\eta|. Comment: 6 pages, 3 figures, 3 tables. v2: to match version appearing in PRD, resolution in figures improved.
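    An LHC charge asymmetry of this kind is conventionally built from the difference of absolute rapidities, A_C = (N(D > 0) - N(D < 0)) / (N(D > 0) + N(D < 0)) with D = |y_t| - |y_tbar|. A minimal sketch of computing such an observable after an invariant-mass cut, using toy rapidity distributions (the numbers and shapes are illustrative assumptions, not any of the models studied in the paper):

    ```python
    import numpy as np

    def charge_asymmetry(y_t, y_tbar, m_tt, m_cut=0.0):
        """A_C = (N(D>0) - N(D<0)) / (N(D>0) + N(D<0)), D = |y_t| - |y_tbar|,
        evaluated on events passing the invariant-mass cut m_tt > m_cut."""
        sel = m_tt > m_cut
        d = np.abs(y_t[sel]) - np.abs(y_tbar[sel])
        n_pos, n_neg = np.sum(d > 0), np.sum(d < 0)
        return (n_pos - n_neg) / (n_pos + n_neg)

    # Toy events: tops drawn slightly broader in rapidity than anti-tops,
    # mimicking a positive asymmetry (illustrative only)
    rng = np.random.default_rng(1)
    n = 100_000
    y_t = rng.normal(0.0, 1.1, n)
    y_tbar = rng.normal(0.0, 1.0, n)
    m_tt = rng.uniform(350.0, 1500.0, n)  # GeV

    a_c = charge_asymmetry(y_t, y_tbar, m_tt, m_cut=450.0)
    print(a_c)
    ```

    Raising `m_cut` is precisely the kind of handle the abstract discusses: it suppresses the symmetric gg contribution at the cost of statistics.
    
    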

    Gamma-rays from Dark Showers with Twin Higgs Models

    We consider a twin WIMP scenario whose twin sector contains a full dark copy of the SM hadrons, where the lightest twin particles are twin pions. By analogy to the standard WIMP paradigm, the dark matter (DM) freezes out through twin electroweak interactions and annihilates into a dark shower of light twin hadrons. These are either stable or decay predominantly to standard model (SM) photons. We show that this 'hadrosymmetric' scenario can be consistent with all applicable astrophysical, cosmological, and collider constraints. In order to decay the twin hadrons before the big-bang nucleosynthesis epoch, an additional portal between the SM and twin sectors is required. In most cases we find this additional mediator is within reach of either the LHC or future intensity frontier experiments. Furthermore, we conduct simulations of the dark shower and the consequent photon spectra. We find that fits of these spectra to the claimed galactic center gamma-ray excess seen by Fermi-LAT non-trivially coincide with regions of parameter space that both successfully generate the observed DM abundance and exhibit minimal fine-tuning. Comment: 45 pages, 11 figures. v2: journal version, extended discussions in Secs. III-V, references added.

    Simulating collider physics on quantum computers using effective field theories

    Simulating the full dynamics of a quantum field theory over a wide range of energies requires exceptionally large quantum computing resources. Yet for many observables in particle physics, perturbative techniques are sufficient to accurately model all but a constrained range of energies within the validity of the theory. We demonstrate that effective field theories (EFTs) provide an efficient mechanism to separate the high-energy dynamics that is easily calculated by traditional perturbation theory from the dynamics at low energy, and show how quantum algorithms can be used to simulate the dynamics of the low-energy EFT from first principles. As an explicit example we calculate the expectation values of vacuum-to-vacuum and vacuum-to-one-particle transitions in the presence of a time-ordered product of two Wilson lines in scalar field theory, an object closely related to those arising in EFTs of the Standard Model of particle physics. Calculations are performed using simulations of a quantum computer as well as measurements using the IBMQ Manhattan machine. Comment: 5 pages, plus 11 pages of Supplemental Material.
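    Digital quantum simulation of the kind described here ultimately rests on splitting the Hamiltonian into pieces and Trotterizing the time evolution. A generic single-qubit illustration of that standard ingredient (a toy Hamiltonian, not the Wilson-line calculation of the paper), showing that the first-order Trotter product approaches the exact evolution as the number of steps grows:

    ```python
    import numpy as np

    # Pauli matrices
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def evolve(H, t):
        """exp(-i H t) for Hermitian H, via eigendecomposition."""
        vals, vecs = np.linalg.eigh(H)
        return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

    H1, H2 = 0.7 * Z, 0.4 * X      # non-commuting pieces of a toy Hamiltonian
    t, n_steps = 1.0, 100

    exact = evolve(H1 + H2, t)
    step = evolve(H1, t / n_steps) @ evolve(H2, t / n_steps)
    trotter = np.linalg.matrix_power(step, n_steps)

    # First-order Trotter error shrinks like 1/n_steps
    err = np.linalg.norm(trotter - exact)
    print(err)
    ```

    On hardware each `evolve` factor would be compiled into native gates; the 1/n_steps error scaling is what sets the gate-count cost that EFT factorization helps contain.
    
    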

    Boosting H→bb̄ with Machine Learning

    High p_T Higgs production at hadron colliders provides a direct probe of the internal structure of the gg → H loop, with the H → bb̄ decay offering the most statistics due to the large branching ratio. Despite the overwhelming QCD background, recent advances in jet substructure have put the observation of the gg → H → bb̄ channel at the LHC within the realm of possibility. In order to enhance the sensitivity to this process, we develop a two-stream convolutional neural network, with one stream acting on jet information and one using global event properties. The neural network significantly increases the discovery potential of a Higgs signal, both for high p_T Standard Model production as well as for possible beyond the Standard Model contributions. Unlike most studies for boosted hadronically decaying massive particles, the boosted Higgs search is unique because double b-tagging rejects nearly all background processes that do not have two hard prongs. In this context, which goes beyond state-of-the-art two-prong tagging, the network is studied to identify the origin of the additional information leading to the increased significance. The procedures described here are also applicable to related final states, where they can be used to identify additional sources of discrimination power that are not being exploited by current techniques. Comment: 26 pages, 12 figures. v3: Updated to journal version.
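    A two-stream architecture of the kind described can be sketched schematically: one stream convolves the jet image, the other processes global event variables, and the two are merged before a final classifier. The image size, layer widths, and random untrained weights below are assumptions for illustration only, not the paper's network:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(img, kernel):
        """Valid-mode 2-D convolution (single channel), explicit loops for clarity."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    def relu(x):
        return np.maximum(x, 0.0)

    def two_stream_forward(jet_image, event_vars, p):
        """Stream 1: conv layer on an 8x8 jet image. Stream 2: dense layer on
        global event variables. Concatenate, then a final sigmoid classifier."""
        img_feat = relu(conv2d(jet_image, p["kernel"]).ravel() @ p["W_img"])
        glob_feat = relu(event_vars @ p["W_glob"])
        merged = np.concatenate([img_feat, glob_feat])
        return 1.0 / (1.0 + np.exp(-(merged @ p["w_out"])))

    # Untrained toy parameters: 8x8 image, 3x3 kernel -> 36 conv outputs
    params = {
        "kernel": rng.normal(0, 0.1, (3, 3)),
        "W_img": rng.normal(0, 0.1, (36, 8)),
        "W_glob": rng.normal(0, 0.1, (4, 8)),
        "w_out": rng.normal(0, 0.1, 16),
    }

    score = two_stream_forward(rng.normal(0, 1, (8, 8)), rng.normal(0, 1, 4), params)
    print(score)  # a single classification score in (0, 1)
    ```

    Keeping the jet and event streams separate until the merge is what lets one probe, as the abstract describes, which stream supplies the extra discriminating information.
    
    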